A polarization camera has great potential for 3D reconstruction, since the angle of polarization (AoP) and the degree of polarization (DoP) of reflected light are related to an object's surface normal. In this paper, we propose a novel 3D reconstruction method called Polarimetric Multi-View Inverse Rendering (Polarimetric MVIR) that effectively exploits geometric, photometric, and polarimetric cues extracted from input multi-view color-polarization images. We first estimate camera poses and an initial 3D model by geometric reconstruction with a standard structure-from-motion and multi-view stereo pipeline. We then refine the initial model by optimizing photometric rendering errors and polarimetric errors using multi-view RGB, AoP, and DoP images, where we propose a novel polarimetric cost function that effectively constrains the estimated surface normal of each vertex while accounting for the four possible ambiguous azimuth angles revealed by the AoP measurement. The weight for the polarimetric cost is determined from the DoP measurement, which is regarded as the reliability of the polarimetric information. Experimental results using both synthetic and real data demonstrate that our Polarimetric MVIR can reconstruct a detailed 3D shape without assuming a specific surface material or lighting condition.
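As an illustration of how the four-fold azimuth ambiguity and the DoP-based weighting can be combined into a per-vertex cost, the following minimal sketch (not the paper's exact formulation; the candidate set, camera convention, and weighting are assumptions) scores an estimated normal against an AoP/DoP measurement:

```python
import numpy as np

def polarimetric_cost(normal, aop, dop):
    """Hypothetical per-vertex polarimetric cost (not the paper's exact formulation).

    The azimuth of the surface normal in the image plane should match one of four
    candidate angles derived from the measured angle of polarization (AoP): the
    pi-ambiguity of the AoP itself plus the 90-degree shift between specular and
    diffuse reflection. The degree of polarization (DoP) weights the cost, since a
    low DoP makes the AoP measurement unreliable.
    """
    # Azimuth of the estimated normal in the image plane (camera assumed to look along +z).
    azimuth = np.arctan2(normal[1], normal[0])

    # Four candidate azimuths implied by the AoP measurement.
    candidates = aop + np.array([0.0, 0.5, 1.0, 1.5]) * np.pi

    # Smallest circular distance to any candidate, weighted by the DoP.
    diff = np.angle(np.exp(1j * (azimuth - candidates)))  # wrap to (-pi, pi]
    return dop * np.min(diff ** 2)

# Example: a normal tilted toward +x, AoP of 10 degrees, fairly reliable DoP.
print(polarimetric_cost(np.array([0.6, 0.0, 0.8]), np.deg2rad(10.0), dop=0.7))
```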
The neural transducer is now the most popular end-to-end model for speech recognition due to its natural streaming ability. However, it is challenging to adapt with text-only data. The factorized neural transducer (FNT) model was proposed to mitigate this problem, but its improved adaptability on text-only data came at the cost of lower accuracy than the standard neural transducer model. We propose several methods to improve the performance of the FNT model: adding a CTC criterion during training, adding a KL divergence loss during adaptation, using a pre-trained language model to seed the vocabulary predictor, and an efficient adaptation approach that interpolates the vocabulary predictor with an n-gram language model. Combining these approaches yields a 9.48\% relative word-error-rate reduction over the standard FNT model. Furthermore, n-gram interpolation with the vocabulary predictor greatly speeds up adaptation while maintaining satisfactory adaptation performance.
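A minimal sketch of the last of these ideas, interpolating the vocabulary predictor's distribution with an n-gram LM during decoding, is shown below; the linear mixing scheme and the weight `lam` are assumptions for illustration rather than the paper's exact formulation:

```python
import torch

def interpolate_with_ngram(vocab_logits, ngram_logprobs, lam=0.3):
    """Illustrative linear interpolation of the FNT vocabulary predictor with an
    external n-gram LM (a sketch; the exact interpolation scheme may differ).

    vocab_logits:   (batch, vocab) raw scores from the vocabulary predictor
    ngram_logprobs: (batch, vocab) log-probabilities from the adaptation n-gram LM
    """
    p_vocab = torch.softmax(vocab_logits, dim=-1)
    p_ngram = ngram_logprobs.exp()
    # Mix the two distributions and return log-probabilities for decoding.
    return torch.log((1.0 - lam) * p_vocab + lam * p_ngram + 1e-10)
```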
Background: Assessment of left ventricular (LV) function from myocardial perfusion SPECT (MPS) relies on accurate myocardial segmentation. The purpose of this paper is to develop and validate a new method that combines deep learning with shape priors to accurately extract the LV myocardium for automatic measurement of LV functional parameters. Methods: A segmentation architecture that integrates a three-dimensional (3D) V-Net with a shape deformation module was developed. Shape priors generated by a dynamic programming (DP) algorithm were then used to constrain and guide the model output during training, enabling fast convergence and improved performance. Stratified 5-fold cross-validation was used to train and validate our model. Results: The results of our proposed method agree well with the ground truth. Our proposed model achieved Dice similarity coefficients (DSC) of 0.9573 (0.0244), 0.9821 (0.0137), and 0.9903 (0.0041), and Hausdorff distances (HD) of 6.7529 (2.7334) mm, 7.2507 (3.1952) mm, and 7.6122 (3.0134) mm for extracting the endocardium, myocardium, and epicardium, respectively. Conclusion: Our proposed method achieves high accuracy in extracting LV myocardial contours and assessing LV function.
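For reference, the two reported metrics can be computed as follows; this is a generic illustration of DSC and HD on binary masks and surface point sets, not the authors' evaluation code:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred, target):
    """Dice similarity coefficient between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + 1e-8)

def hausdorff_distance(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets (e.g. surface voxels
    of the predicted and reference myocardium), in the units of the input
    coordinates (mm if the voxel coordinates are given in mm)."""
    return max(directed_hausdorff(points_a, points_b)[0],
               directed_hausdorff(points_b, points_a)[0])
```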
Contextual biasing is an important and challenging task for end-to-end automatic speech recognition (ASR) systems. Existing methods mainly include contextual LM biasing and adding a bias encoder to the end-to-end ASR model. In this work, we introduce a novel approach that achieves contextual biasing by adding a contextual spelling correction model on top of the end-to-end ASR system. We incorporate contextual information into a sequence-to-sequence spelling correction model with a shared context encoder. Our proposed model includes two different mechanisms: autoregressive (AR) and non-autoregressive (NAR). We propose a filtering algorithm to handle large-size context lists, and a performance balancing mechanism to control the biasing degree of the model. We demonstrate that the proposed model is a general biasing solution that is domain-insensitive and can be adopted in different scenarios. Experiments show that the proposed method achieves up to a 51% relative word error rate (WER) reduction over the ASR system and outperforms traditional biasing methods. Compared with the AR solution, the proposed NAR model reduces the model size by 43.2% and speeds up inference by 2.1 times.
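The abstract does not spell out the filtering algorithm, but a hypothetical sketch of the general idea, pruning a large context list down to the phrases that resemble some span of the ASR hypothesis before running the correction model, could look like this:

```python
from difflib import SequenceMatcher

def filter_context_list(hypothesis, context_phrases, threshold=0.6, max_keep=100):
    """Hypothetical context-list filtering step (the paper's exact algorithm is
    not given in the abstract): keep only the context phrases sufficiently
    similar to some span of the ASR hypothesis, so the spelling correction
    model receives a short, relevant list instead of the full one."""
    scored = []
    words = hypothesis.split()
    for phrase in context_phrases:
        best, n = 0.0, len(phrase.split())
        # Compare the phrase against every n-word window of the hypothesis.
        for i in range(max(1, len(words) - n + 1)):
            span = " ".join(words[i:i + n])
            best = max(best, SequenceMatcher(None, phrase.lower(), span.lower()).ratio())
        if best >= threshold:
            scored.append((best, phrase))
    scored.sort(reverse=True)
    return [p for _, p in scored[:max_keep]]

print(filter_context_list("please call jon smyth now", ["John Smith", "Acme Corp"]))
```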
Existing methods detect keypoints in a non-differentiable way, so the keypoint positions cannot be directly optimized through back-propagation. To address this issue, we present a differentiable keypoint detection module that outputs accurate sub-pixel keypoints. A reprojection loss is then proposed to directly optimize these sub-pixel keypoints, and a dispersity peak loss is presented for accurate keypoint regularization. We also extract descriptors in a sub-pixel manner and train them with a stable neural reprojection error loss. In addition, a lightweight network is designed for keypoint detection and descriptor extraction, which can run at 95 frames per second on a commercial GPU. On homography estimation, camera pose estimation, and visual (re)localization tasks, the proposed method achieves performance on par with state-of-the-art methods while greatly reducing inference time.
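A common way to make keypoint positions differentiable is a local soft-argmax over the score map; the sketch below illustrates that idea (the module in the paper may differ in detail, and the window radius and temperature are assumptions):

```python
import torch
import torch.nn.functional as F

def subpixel_keypoints(score_map, coarse_xy, radius=2, temperature=0.1):
    """Illustrative differentiable sub-pixel refinement via a local soft-argmax.

    score_map: (H, W) keypoint score map
    coarse_xy: (N, 2) integer pixel locations of coarse keypoints (x, y)
    Returns (N, 2) refined keypoint coordinates that are differentiable
    with respect to the score map.
    """
    H, W = score_map.shape
    offsets = torch.arange(-radius, radius + 1, dtype=score_map.dtype)
    dy, dx = torch.meshgrid(offsets, offsets, indexing="ij")   # (2r+1, 2r+1)

    refined = []
    for x, y in coarse_xy.tolist():
        ys = (y + dy).long().clamp(0, H - 1)
        xs = (x + dx).long().clamp(0, W - 1)
        patch = score_map[ys, xs]                              # local window of scores
        weights = F.softmax(patch.flatten() / temperature, dim=0).view_as(patch)
        # Expected offset under the softmax distribution -> sub-pixel position.
        refined.append(torch.stack([x + (weights * dx).sum(),
                                    y + (weights * dy).sum()]))
    return torch.stack(refined)
```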
Self-supervised learning (SSL) methods such as WavLM have shown promising speech separation (SS) results in small-scale simulation-based experiments. In this work, we extend the exploration of SSL-based SS by massively scaling up both the pre-training data (more than 300K hours) and the fine-tuning data (10K hours). We also investigate various techniques to efficiently integrate the pre-trained model with the SS network under a limited computation budget, including a low-frame-rate SSL model training setup and a fine-tuning scheme that uses only part of the pre-trained model. Compared with a supervised baseline and a WavLM-based SS model using feature embeddings obtained with the previously released WavLM trained on 94K hours, our proposed model achieves relative word error rate (WER) reductions of 15.9% and 11.2%, respectively, on a simulated far-field speech mixture test set. For conversation transcription on real meeting recordings using continuous speech separation, the proposed model achieves relative WER reductions of 6.8% and 10.6% over the purely supervised baseline on the AMI and ICSI evaluation sets, respectively, while reducing the computational cost by 38%.
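The "use only part of the pre-trained model" scheme can be illustrated with a small sketch; the interface below (an SSL encoder exposing its transformer blocks as an `nn.ModuleList`) is an assumption for illustration, not the actual WavLM fine-tuning code:

```python
import torch.nn as nn

def build_ss_frontend(pretrained_blocks: nn.ModuleList, num_layers_used=8, freeze=True):
    """Keep only the bottom `num_layers_used` transformer blocks of a pre-trained
    SSL encoder as the separation front-end, optionally freezing them so that
    only the separation network on top is updated during fine-tuning."""
    frontend = nn.Sequential(*list(pretrained_blocks)[:num_layers_used])
    if freeze:
        for p in frontend.parameters():
            p.requires_grad = False
    return frontend
```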
Automatic Speech Recognition (ASR) systems typically yield output in lexical form. However, humans prefer a written-form output. To bridge this gap, ASR systems usually employ Inverse Text Normalization (ITN). In previous works, Weighted Finite State Transducers (WFST) have been employed to perform ITN. WFSTs are well suited to this task, but their size and run-time costs can make deployment on embedded applications challenging. In this paper, we describe the development of an on-device ITN system that is streaming, lightweight, and accurate. At the core of our system is a streaming transformer tagger that tags lexical tokens from ASR. The tag indicates which ITN category, if any, should be applied. Following that, we apply an ITN-category-specific WFST, only on the tagged text, to reliably perform the ITN conversion. We show that the proposed ITN solution performs on par with strong baselines, while being significantly smaller in size and retaining customization capabilities.
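The tag-then-convert flow can be sketched as follows; the `tagger` and the per-category transducers are hypothetical stand-ins for the streaming transformer tagger and the ITN-category-specific WFSTs:

```python
def apply_itn(tokens, tagger, category_wfsts):
    """Tag lexical tokens, then rewrite only the tagged spans with the matching
    category-specific transducer; untagged tokens pass through unchanged."""
    tags = tagger(tokens)                       # one tag per token, "O" for plain text
    output, span, span_cat = [], [], None

    def flush():
        nonlocal span_cat
        if span:
            output.append(category_wfsts[span_cat](" ".join(span)))
            span.clear()
        span_cat = None

    for token, tag in zip(tokens, tags):
        if tag == "O":
            flush()
            output.append(token)
        else:
            if tag != span_cat:
                flush()
                span_cat = tag
            span.append(token)
    flush()
    return " ".join(output)

# Toy example with a stand-in "WFST" for cardinals:
tagger = lambda toks: ["O", "CARDINAL", "CARDINAL", "CARDINAL"]
wfsts = {"CARDINAL": lambda text: {"nine one one": "911"}.get(text, text)}
print(apply_itn(["call", "nine", "one", "one"], tagger, wfsts))  # -> "call 911"
```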
With the development of depth sensors in recent years, RGBD object tracking has received significant attention. Compared with traditional RGB object tracking, the addition of the depth modality can effectively alleviate interference between the target and the background. However, some existing RGBD trackers use the two modalities separately, and thus particularly useful shared information between them is ignored. On the other hand, some methods attempt to fuse the two modalities by treating them equally, resulting in the loss of modality-specific features. To tackle these limitations, we propose a novel Dual-fused Modality-aware Tracker (termed DMTracker), which aims to learn informative and discriminative representations of the target objects for robust RGBD tracking. The first fusion module focuses on extracting the shared information between modalities based on cross-modal attention. The second aims at integrating the RGB-specific and depth-specific information to enhance the fused features. By fusing both the modality-shared and modality-specific information in a modality-aware scheme, our DMTracker can learn discriminative representations in complex tracking scenes. Experiments show that our proposed tracker achieves very promising results on challenging RGBD benchmarks. Code is available at \url{https://github.com/ShangGaoG/DMTracker}.
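A sketch of a cross-modal attention block in the spirit of the modality-shared fusion module is given below; the layer sizes and attention layout are assumptions rather than the released DMTracker code:

```python
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """RGB features attend to depth features and vice versa, so each stream can
    pick up information shared with the other modality."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.rgb_from_depth = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.depth_from_rgb = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_rgb = nn.LayerNorm(dim)
        self.norm_depth = nn.LayerNorm(dim)

    def forward(self, rgb_feat, depth_feat):
        # rgb_feat, depth_feat: (batch, tokens, dim) flattened feature maps
        rgb_out, _ = self.rgb_from_depth(rgb_feat, depth_feat, depth_feat)
        depth_out, _ = self.depth_from_rgb(depth_feat, rgb_feat, rgb_feat)
        rgb_feat = self.norm_rgb(rgb_feat + rgb_out)          # residual + norm
        depth_feat = self.norm_depth(depth_feat + depth_out)
        return rgb_feat, depth_feat
```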
End-to-end formulation of automatic speech recognition (ASR) and speech translation (ST) makes it easy to use a single model for both multilingual ASR and many-to-many ST. In this paper, we propose streaming language-agnostic multilingual speech recognition and translation using neural transducers (LAMASSU). To enable multilingual text generation in LAMASSU, we conduct a systematic comparison between specified and unified prediction and joint networks. We leverage a language-agnostic multilingual encoder that substantially outperforms shared encoders. To enhance LAMASSU, we propose to feed the target language ID (LID) to the encoders. We also apply connectionist temporal classification regularization to transducer training. Experimental results show that LAMASSU not only drastically reduces the model size but also outperforms monolingual ASR and bilingual ST models.
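The CTC regularization of transducer training can be sketched as an auxiliary loss term; the `ctc_head` projection and `ctc_weight` below are assumptions for illustration, not LAMASSU's exact configuration:

```python
import torch.nn.functional as F
import torchaudio

def transducer_with_ctc_loss(enc_out, enc_lens, joint_logits, targets, target_lens,
                             ctc_head, blank=0, ctc_weight=0.3):
    """Transducer loss plus an auxiliary CTC term computed on the encoder output.

    enc_out:      (batch, time, dim) encoder output
    joint_logits: (batch, time, label_len + 1, vocab) joint network output
    ctc_head:     e.g. an nn.Linear(dim, vocab) projection (assumed here)
    """
    rnnt = torchaudio.functional.rnnt_loss(
        joint_logits, targets.int(), enc_lens.int(), target_lens.int(), blank=blank)
    log_probs = F.log_softmax(ctc_head(enc_out), dim=-1)        # (batch, time, vocab)
    ctc = F.ctc_loss(log_probs.transpose(0, 1), targets,
                     enc_lens, target_lens, blank=blank)
    return rnnt + ctc_weight * ctc
```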
In this paper, we introduce our work of building a Streaming Multilingual Speech Model (SM2), which can transcribe or translate multiple spoken languages into text of the target language. The backbone of SM2 is the Transformer Transducer, which has high streaming capability. Instead of human-labeled speech translation (ST) data, SM2 models are trained using weakly supervised data generated by converting the transcriptions in speech recognition corpora with a machine translation service. With 351 thousand hours of anonymized speech training data from 25 languages, SM2 models achieve comparable or even better ST quality than some recent popular large-scale non-streaming speech models. More importantly, we show that SM2 has truly zero-shot capability when expanding to new target languages, yielding high-quality ST results for {source-speech, target-text} pairs that are not seen during training.
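The weakly supervised data generation step can be sketched as a simple conversion loop; the `translate` callable below stands in for whatever MT service is used and is hypothetical:

```python
def build_weak_st_data(asr_corpus, translate, target_lang="de"):
    """Produce pseudo speech-translation labels by machine-translating the
    reference transcriptions of an ASR corpus.

    asr_corpus: iterable of (audio_path, transcription) pairs
    yields:     (audio_path, translated_text) pairs for ST training
    """
    for audio_path, transcription in asr_corpus:
        yield audio_path, translate(transcription, target_lang=target_lang)

# Usage with a dummy translator:
pairs = [("utt1.wav", "hello world")]
fake_mt = lambda text, target_lang: f"[{target_lang}] {text}"
print(list(build_weak_st_data(pairs, fake_mt)))
```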